
    Evolving stochastic learning algorithm based on Tsallis entropic index

    In this paper, inspired by our previous algorithm based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q and regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the value of q. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that the new evolving stochastic learning algorithm does indeed improve convergence speed, making learning faster than the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and the temperature T on the convergence speed and stability of the proposed method.
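
    A rough sketch of the kind of update the abstract describes is given below: a per-weight adaptive stepsize (the deterministic component) combined with heavy-tailed noise whose scale follows a cooling temperature T(t) coupled to the entropic index q. The q-T coupling, the weight-decay regulation, and the use of a Student-t sample as a stand-in for q-Gaussian noise are illustrative assumptions, not the paper's actual formulas.

```python
import numpy as np

def evolving_stochastic_step(w, grad, prev_grad, step, t,
                             T0=1.0, q0=1.5, decay=1e-4,
                             incr=1.2, decr=0.5):
    """One hypothetical update combining a per-weight adaptive stepsize
    (deterministic part) with temperature-scaled heavy-tailed noise
    (stochastic part). Illustrative assumptions, not the paper's formulas."""
    # Assumed time-dependent coupling: the temperature cools with t and
    # drags q towards 1 (the more deterministic regime) as it does so.
    T = T0 / (1.0 + decay * t)
    q = 1.0 + (q0 - 1.0) * T / T0

    # Deterministic component: Rprop-like per-weight stepsize adaptation,
    # growing on stable gradient signs, shrinking on sign flips.
    step = np.where(np.sign(grad) == np.sign(prev_grad), step * incr, step * decr)

    # Stochastic component: for 1 < q < 3 a q-Gaussian is equivalent to a
    # Student-t with nu = (3 - q) / (q - 1); its scale follows T.
    nu = (3.0 - q) / max(q - 1.0, 1e-6)
    noise = T * np.random.standard_t(nu, size=w.shape)

    # Weight decay regulates the weight magnitudes (and hence the relative
    # influence of the injected noise).
    return w - step * np.sign(grad) + noise - decay * w, step
```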

    Multicriteria decision making for enhanced perception-based multimedia communication

    This paper proposes an approach that integrates technical concerns with user perceptual considerations for intelligent decision making in the construction of tailor-made multimedia communication protocols. The proposed approach, based on multicriteria decision making (MDM), incorporates not only classical networking considerations but also user preferences. Furthermore, in keeping with the task-dependent nature consistently identified in multimedia scenarios, the suggested communication protocols also take into account the type of multimedia application that they are transporting. Lastly, the approach opens the possibility for such protocols to adapt dynamically to a changing operating environment and changing user preferences.
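
    As a concrete illustration of the multicriteria idea, the sketch below ranks candidate protocol-stack configurations by a simple weighted sum of criterion scores that mixes networking metrics with user-perceptual preferences. The criteria, weights, and candidate configurations are invented for illustration; the paper's actual MDM formulation is not reproduced here.

```python
# Hypothetical candidate configurations scored on illustrative criteria in [0, 1].
candidates = {
    "reliable, ordered delivery":   {"latency": 0.4, "reliability": 1.0, "smoothness": 0.5},
    "best-effort delivery":         {"latency": 0.9, "reliability": 0.3, "smoothness": 0.8},
    "partially reliable delivery":  {"latency": 0.7, "reliability": 0.7, "smoothness": 0.7},
}

def rank(weights):
    """Rank configurations by a normalized weighted sum of criterion scores."""
    total = sum(weights.values())
    scores = {
        name: sum(weights[c] * value for c, value in criteria.items()) / total
        for name, criteria in candidates.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Weights blend networking concerns with user-perceptual preferences and
# could differ per application type (e.g. stored video vs. interactive audio).
print(rank({"latency": 0.5, "reliability": 0.2, "smoothness": 0.3}))
```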

    Distributed computing methodology for training neural networks in an image-guided diagnostic application

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization overhead, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
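
    The core of the scheme, data-parallel evaluation of the error and gradient over shards of the training set, can be sketched as below. Python's multiprocessing stands in for the parallel virtual machine, and a linear least-squares model stands in for the neural network; both substitutions are simplifications for illustration only.

```python
import numpy as np
from multiprocessing import Pool

def partial_error_and_grad(args):
    """Squared error and gradient of a linear model on one data shard."""
    w, X, y = args
    residual = X @ w - y
    return residual @ residual, 2.0 * X.T @ residual

def distributed_gradient(w, X, y, n_workers=4):
    """Split the data across workers, evaluate partial sums, and combine."""
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    with Pool(n_workers) as pool:
        parts = pool.map(partial_error_and_grad, [(w, Xs, ys) for Xs, ys in shards])
    errors, grads = zip(*parts)
    return sum(errors), sum(grads)   # exact sums over the shards

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y, w = rng.normal(size=(1000, 10)), rng.normal(size=1000), np.zeros(10)
    E, g = distributed_gradient(w, X, y)
    print(E, g[:3])
```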

    Improved processing of microarray data using image reconstruction techniques

    Spotted cDNA microarray data analysis suffers from various problems, such as noise from a variety of sources, missing data, inconsistency, and the presence of outliers. This paper introduces a new method that dramatically reduces the noise when processing the original image data. The proposed approach recreates the microarray slide image as it would have been with all the genes removed. By subtracting this background recreation from the original, the gene ratios can be calculated with more precision and less influence from outliers and other artifacts that would normally make the analysis of these data more difficult. The new technique is also beneficial in that it does not rely on the accurate fitting of a region to each gene; its only requirement is an approximate coordinate. In the experiments conducted, the new method was tested against one of the mainstream methods for processing spotted microarray images and is shown to produce much less variation in gene measurements. This evidence is supported by clustering results that show a marked improvement in accuracy.
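
    The background-subtraction idea can be sketched as follows: estimate the slide as it would look with the spots removed, subtract that estimate from the original, and take channel ratios at the approximate spot coordinates. In this sketch a large-window median filter stands in for the paper's image reconstruction technique; that substitution, and all parameter values, are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import median_filter

def spot_ratios(red, green, coords, window=15, spot_radius=4):
    """red, green: 2-D channel images; coords: (row, col) approximate spot centres,
    assumed to lie in the image interior."""
    # Background estimate: a large-window median suppresses the compact spots,
    # approximating the slide "with the genes removed".
    red_bg, green_bg = median_filter(red, size=window), median_filter(green, size=window)
    red_fg, green_fg = red - red_bg, green - green_bg

    ratios = []
    for r, c in coords:
        region = (slice(r - spot_radius, r + spot_radius + 1),
                  slice(c - spot_radius, c + spot_radius + 1))
        # Ratio of summed foreground intensities around the approximate centre
        ratios.append(red_fg[region].sum() / max(green_fg[region].sum(), 1e-9))
    return ratios
```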

    Intelligent synthesis mechanism for deriving streaming priorities of multimedia content

    We address the problem of integrating user preferences with network quality-of-service parameters for the streaming of media content, and suggest protocol stack configurations that satisfy user and technical requirements to the best available degree. Our approach is able to handle inconsistencies between user and networking considerations by formulating the construction of tailor-made protocols as a prioritization problem, solvable using fuzzy programming.
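
    One common way to cast such a prioritization problem in fuzzy terms is a max-min formulation: each user or network requirement contributes a membership degree in [0, 1] for a candidate configuration, and the candidate that maximizes the smallest membership is preferred, so conflicting requirements are traded off rather than violated outright. The sketch below uses invented membership functions and candidates; it is not the paper's actual fuzzy program.

```python
def ramp(x, low, high):
    """Linear membership rising from 0 at `low` to 1 at `high`."""
    return max(0.0, min(1.0, (x - low) / (high - low)))

# Hypothetical protocol-stack configurations and their predicted behaviour.
candidates = {
    "config_A": {"throughput_mbps": 8.0, "delay_ms": 40.0},
    "config_B": {"throughput_mbps": 4.0, "delay_ms": 15.0},
}

# One membership function per requirement (user-perceptual or networking).
memberships = {
    "user_smoothness": lambda m: ramp(m["throughput_mbps"], 2.0, 10.0),
    "net_low_delay":   lambda m: ramp(-m["delay_ms"], -100.0, -10.0),
}

# Max-min choice: pick the configuration whose worst-satisfied requirement
# is as well satisfied as possible.
best = max(candidates,
           key=lambda name: min(f(candidates[name]) for f in memberships.values()))
print(best)
```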

    Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process

    In this paper, a globally convergent first-order training algorithm is proposed that uses sign-based information of the batch error measure in the framework of the nonlinear Jacobi process. This approach allows us to equip the recently proposed Jacobi–Rprop method with the global convergence property, i.e., convergence to a local minimizer from any initial starting point. We also propose a strategy that ensures that the search direction of the globally convergent Jacobi–Rprop is a descent direction. The behaviour of the algorithm is empirically investigated on eight benchmark problems. Simulation results verify that there are indeed improvements in the convergence success of the algorithm.
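
    In the spirit of that description, the sketch below combines per-weight Rprop-style sign-based steps with a simple safeguard: the direction is checked for descent and the step is accepted only if the batch error actually decreases, backtracking otherwise. The paper's actual Jacobi–Rprop formulation and its convergence conditions are not reproduced here; this is an illustrative stand-in.

```python
import numpy as np

def sign_based_step(w, error_fn, grad, step, incr=1.2, decr=0.5, max_backtracks=10):
    """One hypothetical sign-based update with a descent/decrease safeguard.
    `error_fn` evaluates the batch error; `step` holds per-weight stepsizes."""
    d = -np.sign(grad) * step             # per-weight sign-based direction
    if grad @ d >= 0:                     # safeguard: fall back to steepest descent
        d = -grad * step.mean()

    E0, t = error_fn(w), 1.0
    for _ in range(max_backtracks):       # shrink until the batch error decreases
        if error_fn(w + t * d) < E0:
            return w + t * d, step * incr
        t *= 0.5
    return w, step * decr                 # no decrease found: shrink stepsizes
```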

    Neuro-fuzzy knowledge processing in intelligent learning environments for improved student diagnosis

    In this paper, a neural network implementation of a fuzzy logic-based model of the diagnostic process is proposed as a means to achieve accurate student diagnosis and updates of the student model in Intelligent Learning Environments. The neuro-fuzzy synergy allows the diagnostic model, to some extent, to "imitate" teachers in diagnosing students' characteristics, and equips the intelligent learning environment with reasoning capabilities that can be further used to drive pedagogical decisions depending on the student's learning style. The neuro-fuzzy implementation helps to encode both structured and non-structured teachers' knowledge: when teachers' reasoning is available and well defined, it can be encoded in the form of fuzzy rules; when teachers' reasoning is not well defined but is available through practical examples illustrating their experience, the networks can be trained to represent this experience. The proposed approach has been tested in diagnosing aspects of students' learning style in a discovery-learning environment that aims to help students construct the concepts of vectors in physics and mathematics. The diagnosis outcomes of the model have been compared against the recommendations of a group of five experienced teachers and against the results produced by two alternative soft computing methods. The results of our pilot study show that the neuro-fuzzy model successfully manages the inherent uncertainty of the diagnostic process, especially for marginal cases, i.e., cases where it is very difficult, even for human tutors, to diagnose and accurately evaluate students by directly synthesizing subjective and sometimes conflicting judgments.
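
    The "structured knowledge as rules, unstructured knowledge by training" idea can be illustrated with a tiny ANFIS-style fragment: Gaussian membership functions feed rule nodes (product t-norm), and normalized firing strengths weight the rule consequents. The membership parameters could be set directly from teachers' rules or tuned from examples of their judgments. All variable and rule names below are invented for illustration and are not the paper's diagnostic model.

```python
import numpy as np

def gauss(x, centre, width):
    """Gaussian membership degree of x in a fuzzy set."""
    return np.exp(-0.5 * ((x - centre) / width) ** 2)

def diagnose(answer_correctness, response_time):
    """Hypothetical two-rule fragment: 'correct and fast -> concept mastered',
    'incorrect and slow -> concept not mastered' (inputs scaled to [0, 1])."""
    r1 = gauss(answer_correctness, 1.0, 0.3) * gauss(response_time, 0.2, 0.3)
    r2 = gauss(answer_correctness, 0.0, 0.3) * gauss(response_time, 0.9, 0.3)
    firing = np.array([r1, r2])
    consequents = np.array([1.0, 0.0])    # 1 = mastered, 0 = not mastered
    # Normalized weighted sum of rule consequents
    return firing @ consequents / firing.sum()

print(diagnose(0.8, 0.3))   # high correctness, fast answer -> close to 1
```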